The efficient, automated search for well-performing neural architectures (NAS) has recently attracted considerable attention. The predominant research objective is therefore to reduce the need for costly evaluations of neural architectures while efficiently exploring large search spaces. To this end, surrogate models embed architectures in a latent space and predict their performance, while generative models of neural architectures enable optimization-based search within the latent space the generator draws from. Both surrogate and generative models aim to facilitate query-efficient search in a well-structured latent space. In this paper, we further improve the trade-off between query efficiency and the generation of promising architectures by leveraging the advantages of both efficient surrogate models and generative design. To this end, we propose a generative model, paired with a surrogate predictor, that iteratively learns to generate samples from increasingly promising latent subspaces. This approach leads to very effective and efficient architecture search while keeping the number of queries low. In addition, our approach allows multiple objectives, such as accuracy and hardware latency, to be optimized jointly in a straightforward manner. We demonstrate the benefits of this approach not only for optimizing architectures toward the highest classification accuracy, but also under hardware constraints, and we outperform state-of-the-art methods on NAS benchmarks for both single and multiple objectives. We also achieve state-of-the-art performance on ImageNet. The code is available at http://github.com/jovitalukasik/ag-net.
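A minimal sketch of the iterative generate-then-filter loop described above is given below. The interfaces `generator`, `surrogate`, and `evaluate_architecture` are hypothetical placeholders chosen for illustration; this is not the AG-Net implementation from the linked repository.

```python
def latent_guided_search(generator, surrogate, evaluate_architecture,
                         n_iterations=10, n_candidates=100, n_queries_per_iter=16):
    """Iteratively generate architectures, rank them with a surrogate,
    and spend the expensive evaluation budget only on the most promising ones."""
    evaluated = []  # list of (architecture, true_score) pairs

    for _ in range(n_iterations):
        # 1. Sample candidate architectures from the generator's latent space.
        candidates = [generator.sample() for _ in range(n_candidates)]

        # 2. Rank candidates by predicted performance (cheap surrogate queries).
        ranked = sorted(candidates, key=surrogate.predict, reverse=True)

        # 3. Query the true (costly) objective only for the top candidates.
        for arch in ranked[:n_queries_per_iter]:
            evaluated.append((arch, evaluate_architecture(arch)))

        # 4. Refit generator and surrogate on the best architectures found so far,
        #    biasing future samples toward increasingly promising latent subspaces.
        best = sorted(evaluated, key=lambda pair: pair[1], reverse=True)
        generator.fit([arch for arch, _ in best[:n_queries_per_iter]])
        surrogate.fit(evaluated)

    return max(evaluated, key=lambda pair: pair[1])
```

For a multi-objective setting, `evaluate_architecture` could return a scalarized score (for instance, accuracy penalized by hardware latency) under the same loop structure; the scalarization is an assumption of this sketch, not the paper's exact formulation.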
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP), owing to their excellent understanding and generation abilities. Remarkably, what further sets these models apart is the massive amount of world knowledge they internalize during pretraining. While many downstream applications provide the model with an informational context to aid its performance on the underlying task, how the model's world knowledge interacts with the factual information presented in the context remains underexplored. As a desirable behavior, an LLM should give precedence to the context whenever it contains task-relevant information that conflicts with the model's memorized knowledge. This enables model predictions to be grounded in the context, which can then be used to update or correct specific model predictions without frequent retraining. By contrast, when the context is irrelevant to the task, the model should ignore it and fall back on its internal knowledge. In this paper, we undertake a first joint study of the aforementioned two properties, namely controllability and robustness, in the context of LLMs. We demonstrate that state-of-the-art T5 and PaLM models (both pretrained and finetuned) can exhibit poor controllability and robustness, which do not improve with increasing model size. As a solution, we propose a novel method, Knowledge Aware FineTuning (KAFT), to strengthen both controllability and robustness by incorporating counterfactual and irrelevant contexts into standard supervised datasets. Our comprehensive evaluation showcases the utility of KAFT across model architectures and sizes.
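A minimal sketch of how counterfactual and irrelevant contexts might be mixed into a supervised QA-style dataset, in the spirit of the KAFT idea above, follows. The field names, mixing probabilities, and the simple string-substitution construction are illustrative assumptions, not the paper's actual data pipeline.

```python
import random


def build_kaft_examples(qa_examples, counterfactual_answers, irrelevant_passages,
                        p_counterfactual=0.25, p_irrelevant=0.25, seed=0):
    """Augment (question, context, answer) triples with counterfactual and
    irrelevant contexts.

    - Counterfactual: the context supports a different answer and the target is
      switched to that answer, teaching the model to follow the context.
    - Irrelevant: the context is unrelated and the target stays the original
      answer, teaching the model to fall back on its internal knowledge.
    """
    rng = random.Random(seed)
    augmented = []
    for ex in qa_examples:
        r = rng.random()
        if r < p_counterfactual:
            fake_answer = counterfactual_answers[ex["question"]]
            context = ex["context"].replace(ex["answer"], fake_answer)
            augmented.append({"question": ex["question"],
                              "context": context,
                              "target": fake_answer})       # follow the context
        elif r < p_counterfactual + p_irrelevant:
            augmented.append({"question": ex["question"],
                              "context": rng.choice(irrelevant_passages),
                              "target": ex["answer"]})       # ignore the context
        else:
            augmented.append(dict(ex, target=ex["answer"]))  # standard example
    return augmented
```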
Knowledge distillation has proven to be an effective technique for improving a student model using the predictions of a teacher model. However, recent work has shown that gains in average performance are not uniform across subgroups in the data, and in particular often come at the cost of accuracy on rare subgroups and classes. To preserve strong performance across classes that may follow a long-tailed distribution, we develop distillation techniques tailored to improve the student's worst-class performance. Specifically, we introduce robust optimization objectives in different combinations for the teacher and the student, and further allow training with any trade-off between overall accuracy and the robust worst-class objective. We show empirically that, compared with other baseline methods, our robust distillation techniques not only achieve better worst-class performance but also improve the trade-off between overall performance and worst-class performance. Theoretically, we provide insights into what makes a good teacher when the goal is to train a robust student.
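A minimal sketch of a worst-class-aware distillation objective in the spirit described above is shown below. The interpolation weight, the temperature, and the exponentiated-gradient style class reweighting are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def robust_distillation_loss(student_logits, teacher_logits, labels,
                             class_weights, alpha=0.5, temperature=2.0):
    """Combine a standard distillation loss with a per-class reweighted loss
    that up-weights poorly performing (e.g. rare) classes.

    class_weights: tensor of shape [num_classes]; in a robust-optimization
    scheme these would be shifted toward the current worst classes.
    alpha: trade-off between the average and the robust objective.
    """
    # Soft-label distillation term (average objective).
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    distill = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

    # Per-example cross-entropy, reweighted by class (robust worst-class objective).
    ce = F.cross_entropy(student_logits, labels, reduction="none")
    robust = (class_weights[labels] * ce).mean()

    return (1 - alpha) * distill + alpha * robust


def update_class_weights(class_weights, per_class_loss, step_size=0.1):
    """Exponentiated-gradient style update: move weight toward the worst classes."""
    new_w = class_weights * torch.exp(step_size * per_class_loss)
    return new_w / new_w.sum() * new_w.numel()
```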